Planting a SEED of Vision in Large Language Models
We present SEED, an elaborate image tokenizer that empowers Large Language
Models (LLMs) with the emergent ability to SEE and Draw at the same time.
Research on image tokenizers has previously reached an impasse, as frameworks
employing quantized visual tokens have lost prominence due to subpar
performance and convergence in multimodal comprehension (compared to BLIP-2,
etc.) or generation (compared to Stable Diffusion, etc.). Despite the
limitations, we remain confident in its natural capacity to unify visual and
textual representations, facilitating scalable multimodal training with LLM's
original recipe. In this study, we identify two crucial principles for the
architecture and training of SEED that effectively ease subsequent alignment
with LLMs. (1) Image tokens should be independent of 2D physical patch
positions and instead be produced with a 1D causal dependency, exhibiting
intrinsic interdependence that aligns with the left-to-right autoregressive
prediction mechanism in LLMs. (2) Image tokens should capture high-level
semantics consistent with the degree of semantic abstraction in words, and be
optimized for both discriminativeness and reconstruction during the tokenizer
training phase. As a result, the off-the-shelf LLM is able to perform both
image-to-text and text-to-image generation by incorporating our SEED through
efficient LoRA tuning. Comprehensive multimodal pretraining and instruction
tuning, which may yield improved results, are reserved for future
investigation. This version of SEED was trained in 5.7 days using only 64 V100
GPUs and 5M publicly available image-text pairs. Our preliminary study
emphasizes the great potential of discrete visual tokens in versatile
multimodal LLMs and the importance of proper image tokenizers in broader
research.
Comment: Technical Report; Project released at: https://github.com/AILab-CVC/SEE
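The 1D causal tokenization principle above can be sketched in a toy form (an illustration under assumed names and shapes, not the actual SEED architecture): each discrete token is quantized from the image features conditioned on the embeddings of previously emitted tokens, so token i depends only on tokens before it, matching left-to-right autoregressive prediction in LLMs.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_tokenize(image_feats, codebook, num_tokens=4):
    """Toy 1D causal image tokenizer (hypothetical sketch): emit tokens
    left to right, each conditioned on the image features plus a running
    summary of previously emitted token embeddings."""
    d = codebook.shape[1]
    prev = np.zeros(d)                       # summary of prior tokens
    tokens = []
    for _ in range(num_tokens):
        query = image_feats + prev           # condition on image + history
        # quantize: nearest codebook entry by squared L2 distance
        idx = int(np.argmin(((codebook - query) ** 2).sum(axis=1)))
        tokens.append(idx)
        prev = prev + codebook[idx]          # causal dependency on emitted tokens
    return tokens

codebook = rng.normal(size=(8, 16))          # toy 8-entry codebook
image_feats = rng.normal(size=16)
toks = causal_tokenize(image_feats, codebook)
print(toks)                                  # 4 discrete ids in [0, 8)
```

Because the quantization is deterministic given the image and history, the same image always yields the same 1D token sequence, which is what lets an off-the-shelf LLM model it autoregressively.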
SEED-Bench: Benchmarking Multimodal LLMs with Generative Comprehension
Based on powerful Large Language Models (LLMs), recent generative Multimodal
Large Language Models (MLLMs) have gained prominence as a pivotal research
area, exhibiting remarkable capability for both comprehension and generation.
In this work, we address the evaluation of generative comprehension in MLLMs as
a preliminary step towards a comprehensive assessment of generative models, by
introducing a benchmark named SEED-Bench. SEED-Bench consists of 19K multiple-choice
questions with accurate human annotations (6 times larger than existing
benchmarks), spanning 12 evaluation dimensions including comprehension
of both the image and video modalities. We develop an advanced pipeline for
generating multiple-choice questions that target specific evaluation
dimensions, integrating both automatic filtering and manual verification
processes. Multiple-choice questions with ground-truth options derived from
human annotation enable an objective and efficient assessment of model
performance, eliminating the need for human or GPT intervention during
evaluation. We further evaluate the performance of 18 models across all 12
dimensions, covering both the spatial and temporal understanding. By revealing
the limitations of existing MLLMs through evaluation results, we aim for
SEED-Bench to provide insights for motivating future research. We will launch
and consistently maintain a leaderboard to provide a platform for the community
to assess and investigate model capability.
Comment: Technical Report; Project released at: https://github.com/AILab-CVC/SEED-Benc
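The objective evaluation protocol described above can be sketched as follows (a minimal stand-in, not the SEED-Bench code; `score_fn` and its signature are assumptions): score every candidate option with the model, pick the argmax, and compare against the ground-truth index, so no human or GPT judge is needed.

```python
import numpy as np

def evaluate_mcq(questions, score_fn):
    """Sketch of objective multiple-choice evaluation: for each question,
    score all options with the model (e.g., by answer likelihood) and
    count a hit when the highest-scoring option is the ground truth."""
    correct = 0
    for q in questions:
        scores = [score_fn(q["prompt"], opt) for opt in q["options"]]
        if int(np.argmax(scores)) == q["answer"]:
            correct += 1
    return correct / len(questions)

# toy "model" for illustration only: prefers the longest option
toy_score = lambda prompt, option: len(option)
qs = [
    {"prompt": "What is shown?", "options": ["cat", "a red bus", "sky", "dog"], "answer": 1},
    {"prompt": "How many people?", "options": ["one", "two", "three people total", "none"], "answer": 2},
]
acc = evaluate_mcq(qs, toy_score)
print(acc)  # 1.0: the toy scorer happens to pick both ground truths
```

Swapping `toy_score` for a real model's answer log-likelihood gives the deterministic, judge-free accuracy the benchmark reports.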
Learning Transferable Spatiotemporal Representations from Natural Script Knowledge
Pre-training on large-scale video data has become a common recipe for
learning transferable spatiotemporal representations in recent years. Despite
some progress, existing methods are mostly limited to highly curated datasets
(e.g., K400) and exhibit unsatisfactory out-of-the-box representations. We
argue that it is due to the fact that they only capture pixel-level knowledge
rather than spatiotemporal commonsense, which is far away from cognition-level
video understanding. Inspired by the great success of image-text pre-training
(e.g., CLIP), we take the first step to exploit language semantics to boost
transferable spatiotemporal representation learning. We introduce a new pretext
task, Turning to Video for Transcript Sorting (TVTS), which sorts shuffled ASR
scripts by attending to learned video representations. We do not rely on
descriptive captions and learn purely from video, i.e., leveraging the natural
transcribed speech knowledge to provide noisy but useful semantics over time.
Furthermore, rather than the simple concept learning in vision-caption
contrast, we encourage cognition-level temporal commonsense reasoning via
narrative reorganization. The advantages enable our model to contextualize what
is happening like human beings and seamlessly apply to large-scale uncurated
video data in the real world. Note that our method differs from ones designed
for video-text alignment (e.g., Frozen) and multimodal representation learning
(e.g., Merlot). Our method demonstrates strong out-of-the-box spatiotemporal
representations on diverse video benchmarks, e.g., +13.6% gains over VideoMAE
on SSV2 via linear probing.
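The TVTS pretext task setup can be sketched as follows (an illustration of the data construction, not the paper's code): shuffle the ASR transcript segments of one video and keep the permutation that restores temporal order as the sorting target the model must predict from the video representation.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_tvts_example(segments):
    """Build one toy transcript-sorting example: shuffle the ASR segments
    and return them together with the target permutation that recovers
    the original temporal order."""
    order = rng.permutation(len(segments))
    shuffled = [segments[i] for i in order]
    # target[j] = position in `shuffled` of the j-th original segment
    target = list(np.argsort(order))
    return shuffled, target

segments = ["first we chop the onions", "then fry them", "finally we serve"]
shuffled, target = make_tvts_example(segments)
restored = [shuffled[j] for j in target]
assert restored == segments  # the target permutation restores temporal order
print(target)
```

The training signal comes for free from the video's own speech timeline, which is why the method scales to uncurated data without descriptive captions.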
Self-Play and Self-Describe: Policy Adaptation with Vision-Language Foundation Models
Recent progress on vision-language foundation models has brought significant
advancement to building general-purpose robots. By using the pre-trained models
to encode the scene and instructions as inputs for decision making, the
instruction-conditioned policy can generalize across different objects and
tasks. While this is encouraging, the policy still fails in most cases given an
unseen task or environment. To adapt the policy to unseen tasks and
environments, we explore a new paradigm on leveraging the pre-trained
foundation models with Self-PLAY and Self-Describe (SPLAYD). When deploying the
trained policy to a new task or a new environment, we first let the policy
self-play with randomly generated instructions to record the demonstrations.
While the execution could be wrong, we can use the pre-trained foundation
models to accurately self-describe (i.e., re-label or classify) the
demonstrations. This automatically provides new pairs of
demonstration-instruction data for policy fine-tuning. We evaluate our method
on a broad range of experiments with the focus on generalization on unseen
objects, unseen tasks, unseen environments, and sim-to-real transfer. We show
SPLAYD improves baselines by a large margin in all cases. Our project page is
available at https://geyuying.github.io/SPLAYD/
Comment: Project page: https://geyuying.github.io/SPLAYD
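One SPLAYD adaptation round can be sketched as below (a toy stand-in with assumed names, not the authors' implementation): the policy self-plays with randomly drawn instructions, and a pre-trained foundation-model stand-in re-labels each trajectory with what was actually executed, producing demonstration-instruction pairs for fine-tuning even when the execution disobeyed the instruction.

```python
import random

random.seed(0)
TASKS = ["pick red block", "push green cube", "open drawer"]

class ToyEnv:
    """Minimal environment stand-in: rolling out records what the policy
    actually did, which may differ from what was asked."""
    def rollout(self, policy, instruction):
        return {"asked": instruction, "done": policy(instruction)}

def splayd_round(policy, env, describe, num_episodes=8):
    """One toy self-play + self-describe round: generate random
    instructions, record trajectories, and re-label each trajectory with
    the describer's account of what actually happened."""
    dataset = []
    for _ in range(num_episodes):
        instruction = random.choice(TASKS)         # self-play instruction
        traj = env.rollout(policy, instruction)
        dataset.append((traj, describe(traj)))     # foundation-model re-label
    return dataset

imperfect_policy = lambda instr: random.choice(TASKS)  # often does the wrong task
describe = lambda traj: traj["done"]                   # accurate describer stand-in
pairs = splayd_round(imperfect_policy, ToyEnv(), describe)
print(len(pairs))
```

The key point the sketch preserves is that every collected label matches the executed behavior, so wrong executions still yield correct training pairs.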
GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields
It is a long-standing problem in robotics to develop agents capable of
executing diverse manipulation tasks from visual observations in unstructured
real-world environments. To achieve this goal, the robot needs to have a
comprehensive understanding of the 3D structure and semantics of the scene. In
this work, we present GNFactor, a visual behavior cloning agent for
multi-task robotic manipulation with Generalizable Neural
Feature Fields. GNFactor jointly optimizes a generalizable neural
field (GNF) as a reconstruction module and a Perceiver Transformer as a
decision-making module, leveraging a shared deep 3D voxel representation. To
incorporate semantics in 3D, the reconstruction module utilizes a
vision-language foundation model (e.g., Stable Diffusion) to distill
rich semantic information into the deep 3D voxel. We evaluate GNFactor on 3
real robot tasks and perform detailed ablations on 10 RLBench tasks with a
limited number of demonstrations. We observe a substantial improvement of
GNFactor over current state-of-the-art methods in seen and unseen tasks,
demonstrating the strong generalization ability of GNFactor. Our project
website is https://yanjieze.com/GNFactor/
Comment: CoRL 2023 Oral. Website: https://yanjieze.com/GNFactor
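The joint objective over a shared representation can be sketched in a toy form (illustrative heads and weighting, not the GNFactor code): a single voxel feature feeds both a reconstruction head, supervised to match features distilled from a 2D foundation model, and a decision head predicting the action, with the two losses summed.

```python
import numpy as np

rng = np.random.default_rng(2)

def joint_loss(voxel, recon_head, action_head, target_feats, target_action, lam=0.01):
    """Toy joint objective over one shared voxel feature: mean-squared
    reconstruction loss (distilled-feature matching) plus a weighted
    mean-squared action loss from the decision head."""
    l_recon = float(((recon_head @ voxel - target_feats) ** 2).mean())
    l_action = float(((action_head @ voxel - target_action) ** 2).mean())
    return l_recon + lam * l_action

voxel = rng.normal(size=32)                  # flattened shared voxel feature
recon_head = rng.normal(size=(16, 32))
action_head = rng.normal(size=(7, 32))
loss = joint_loss(voxel, recon_head, action_head,
                  target_feats=recon_head @ voxel,   # perfect reconstruction case
                  target_action=rng.normal(size=7))
print(loss)
```

Because both heads read the same voxel feature, gradients from the reconstruction term shape the representation the decision head uses, which is the design point the abstract highlights.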
Retrieving-to-Answer: Zero-Shot Video Question Answering with Frozen Large Language Models
Video Question Answering (VideoQA) has been significantly advanced from the
scaling of recent Large Language Models (LLMs). The key idea is to convert the
visual information into the language feature space so that the capacity of LLMs
can be fully exploited. Existing VideoQA methods typically take two paradigms:
(1) learning cross-modal alignment, and (2) using an off-the-shelf captioning
model to describe the visual data. However, the first design needs costly
training on much extra multi-modal data, whilst the second is constrained by
limited domain generalization. To address these limitations, a simple yet
effective Retrieving-to-Answer (R2A) framework is proposed. Given an input
video, R2A first retrieves a set of semantically similar texts from a generic
text corpus using a pre-trained multi-modal model (e.g., CLIP). With both the
question and the retrieved texts, an LLM (e.g., DeBERTa) can be directly used to
yield the desired answer. Without the need for cross-modal fine-tuning, R2A
allows all the key components (e.g., the LLM, retrieval model, and text corpus)
to be plug-and-play. Extensive experiments on several VideoQA benchmarks show
that despite having only 1.3B parameters and no fine-tuning, R2A can outperform
the 61-times-larger Flamingo-80B model, which was additionally trained on nearly
2.1B multi-modal data pairs.
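The R2A pipeline can be sketched as follows (minimal stand-in components with assumed names, not the paper's code): rank a generic text corpus by cosine similarity to the frozen multi-modal video embedding, then hand the question plus the top-k retrieved texts to a frozen LLM stand-in.

```python
import numpy as np

def retrieve_to_answer(video_emb, corpus, corpus_embs, answer_fn, question, k=2):
    """Toy retrieve-then-answer: cosine-rank the corpus against the video
    embedding, take the top-k texts as context, and delegate to a frozen
    answering function."""
    sims = corpus_embs @ video_emb / (
        np.linalg.norm(corpus_embs, axis=1) * np.linalg.norm(video_emb) + 1e-8)
    context = [corpus[i] for i in np.argsort(-sims)[:k]]
    return answer_fn(question, context)

# toy corpus with hand-made 3-d embeddings standing in for CLIP features
corpus = ["a dog runs on grass", "a man cooks pasta", "a car drives at night"]
corpus_embs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
video_emb = np.array([0.9, 0.1, 0.0])        # video is closest to the dog text
answer_fn = lambda q, ctx: ctx[0]            # stand-in LLM: echo the top text
ans = retrieve_to_answer(video_emb, corpus, corpus_embs, answer_fn, "What is happening?")
print(ans)  # "a dog runs on grass"
```

No component is trained here, which mirrors the plug-and-play claim: the retriever, corpus, and answerer can each be swapped independently.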
LncRNA-42060 Regulates Tamoxifen Sensitivity and Tumor Development via Regulating the miR-204-5p/SOX4 Axis in Canine Mammary Gland Tumor Cells
Tamoxifen is the drug of choice for endocrine therapy of breast cancer, but its clinical use is limited by the development of drug resistance. There is increasing evidence that long non-coding RNAs (lncRNAs) are associated with tumor drug resistance. We therefore established two TAM-resistant cell lines, CHMpTAM and CHMmTAM. Differentially expressed lncRNAs and miRNAs in CHMmTAM versus CHMm were screened by RNA sequencing, and lncRNA-miRNA interactions were analyzed. LncRNA ENSCAFG42060 (lnc-42060) was found to be significantly upregulated in drug-resistant cells and tumor tissues. Further functional validation revealed that knockdown of lnc-42060 in drug-resistant cells inhibited proliferation, migration, clone formation, and stem cell formation, and restored TAM sensitivity, whereas overexpression of lnc-42060 produced the opposite results. Bioinformatics and dual-luciferase reporter gene assays confirmed that lnc-42060 acts as a sponge for miR-204-5p, thereby regulating SOX4 expression and influencing tumor cell progression. In conclusion, we screened lncRNAs and miRNAs associated with TAM resistance in canine mammary gland tumor cells for the first time. lnc-42060 may serve as an important biomarker for future diagnosis and treatment.